280 research outputs found
Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning
Visual language grounding is widely studied in modern neural image captioning
systems, which typically adopt an encoder-decoder framework consisting of two
principal components: a convolutional neural network (CNN) for image feature
extraction and a recurrent neural network (RNN) for language caption
generation. To study the robustness of language grounding to adversarial
perturbations in machine vision and perception, we propose Show-and-Fool, a
novel algorithm for crafting adversarial examples in neural image captioning.
The proposed algorithm provides two evaluation approaches, which check whether
neural image captioning systems can be misled to output some randomly chosen
captions or keywords. Our extensive experiments show that our algorithm can
successfully craft visually-similar adversarial examples with randomly targeted
captions or keywords, and the adversarial examples can be made highly
transferable to other image captioning systems. Consequently, our approach
leads to new robustness implications for neural image captioning and novel
insights into visual language grounding. Comment: Accepted by the 56th Annual
Meeting of the Association for Computational Linguistics (ACL 2018). Hongge
Chen and Huan Zhang contributed equally to this work.
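To make the targeted-caption setting above concrete, the following is a minimal
sketch of such an attack, assuming a differentiable captioning model exposed as
captioner(image, teacher_force=target_ids) that returns per-step vocabulary
logits; this interface, the hyperparameters, and the simple L2-penalized loss
are illustrative assumptions rather than the paper's exact Show-and-Fool
formulation.

```python
# Illustrative sketch of a targeted caption attack; the captioner
# interface and the simple loss below are assumptions, not the exact
# Show-and-Fool objective from the paper.
import torch
import torch.nn.functional as F

def targeted_caption_attack(captioner, image, target_ids, steps=1000,
                            lr=1e-2, c=10.0):
    """Search for a small perturbation that makes the frozen captioner
    emit the target caption (target_ids: LongTensor of token ids)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(image + delta, 0.0, 1.0)
        # Assumed API: per-step logits of shape (T, vocab_size) when
        # teacher-forced with the target caption.
        logits = captioner(adv, teacher_force=target_ids)
        caption_loss = F.cross_entropy(logits, target_ids)
        # Trade off caption success against visual similarity.
        loss = c * caption_loss + delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(image + delta, 0.0, 1.0).detach()
```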
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Deep neural networks (DNNs) are one of the most prominent technologies of our
time, as they achieve state-of-the-art performance in many machine learning
tasks, including but not limited to image classification, text mining, and
speech processing. However, recent research on DNNs has indicated
ever-increasing concern about their robustness to adversarial examples,
especially
for security-critical tasks such as traffic sign identification for autonomous
driving. Studies have unveiled the vulnerability of a well-trained DNN by
demonstrating the ability to generate barely noticeable (to both humans and
machines) adversarial images that lead to misclassification. Furthermore,
researchers have shown that these adversarial images are highly transferable by
simply training and attacking a substitute model built upon the target model,
known as a black-box attack on DNNs.
Similar to the setting of training substitute models, in this paper we
propose an effective black-box attack that also only has access to the input
(images) and the output (confidence scores) of a targeted DNN. However,
different from leveraging attack transferability from substitute models, we
propose zeroth order optimization (ZOO) based attacks to directly estimate the
gradients of the targeted DNN for generating adversarial examples. We use
zeroth order stochastic coordinate descent along with dimension reduction,
hierarchical attack and importance sampling techniques to efficiently attack
black-box models. By exploiting zeroth order optimization, improved attacks on
the targeted DNN can be accomplished, sparing the need to train substitute
models and avoiding the loss in attack transferability. Experimental results on
MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective
as the state-of-the-art white-box attack and significantly outperforms existing
black-box attacks via substitute models. Comment: Accepted by the 10th ACM
Workshop on Artificial Intelligence and Security (AISEC) with the 24th ACM
Conference on Computer and Communications Security (CCS).
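The core of the method summarized above is estimating gradients purely from
model queries. Below is a minimal sketch of the symmetric-difference,
coordinate-wise estimate with a stochastic coordinate-descent update; the
attack loss f (e.g., a margin loss on the returned confidence scores) and the
step sizes are placeholders, and the paper's dimension-reduction, hierarchical
attack, and importance-sampling refinements are omitted.

```python
# Sketch of the zeroth-order, coordinate-wise gradient estimate behind
# ZOO-style attacks: only queries to the model's confidence scores are
# needed, no backpropagation. The attack loss `f` is a placeholder.
import numpy as np

def zoo_coordinate_step(f, x, coords, h=1e-4, eta=1e-2):
    """One stochastic coordinate-descent step on the attack loss f.

    f(x) -> scalar attack loss computed from the black-box model's
    confidence scores; coords are the pixel indices updated this step
    (importance-sampled in the full method, any subset works here).
    """
    x = x.copy()
    for i in coords:
        e = np.zeros_like(x)
        e.flat[i] = h
        # Symmetric-difference estimate of the partial derivative.
        g = (f(x + e) - f(x - e)) / (2.0 * h)
        x.flat[i] -= eta * g
    return x
```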
Efficient Neural Network Robustness Certification with General Activation Functions
Finding minimum distortion of adversarial examples and thus certifying
robustness in neural network classifiers for given data points is known to be a
challenging problem. Nevertheless, it has recently been shown possible to give
a non-trivial certified lower bound on the minimum adversarial distortion, and
progress has been made in this direction by exploiting the piece-wise linear
nature of ReLU activations. However, a generic robustness certification for
general activation functions remains largely
unexplored. To address this issue, in this paper we introduce CROWN, a general
framework to certify robustness of neural networks with general activation
functions for given input data points. The novelty in our algorithm consists of
bounding a given activation function with linear and quadratic functions, hence
allowing it to tackle general activation functions including but not limited to
four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we
facilitate the search for a tighter certified lower bound by adaptively
selecting appropriate surrogates for each neuron activation. Experimental
results show that CROWN on ReLU networks can notably improve the certified
lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while
having comparable computational efficiency. Furthermore, CROWN also
demonstrates its effectiveness and flexibility on networks with general
activation functions, including tanh, sigmoid and arctan. Comment: Accepted by
NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed equally.
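The key step described above is sandwiching each activation between two linear
functions of its pre-activation. A simplified sketch for the sigmoid case is
given below; the secant/tangent choices are one valid heuristic and the
mixed-sign case falls back to loose constant bounds, whereas CROWN selects
tighter surrogates adaptively for each neuron.

```python
# Sketch of linear bounding for a general (non-ReLU) activation: on a
# known pre-activation interval [l, u], sandwich sigmoid(z) between two
# linear functions. A simple valid heuristic, not CROWN's adaptive rule.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def linear_bounds_sigmoid(l, u):
    """Return (a_L, b_L, a_U, b_U) with
    a_L*z + b_L <= sigmoid(z) <= a_U*z + b_U for all z in [l, u]."""
    assert u > l
    secant = (sigmoid(u) - sigmoid(l)) / (u - l)
    mid = 0.5 * (l + u)
    if u <= 0:    # sigmoid is convex here: secant above, tangent below
        a_U, b_U = secant, sigmoid(l) - secant * l
        a_L, b_L = dsigmoid(mid), sigmoid(mid) - dsigmoid(mid) * mid
    elif l >= 0:  # sigmoid is concave here: secant below, tangent above
        a_L, b_L = secant, sigmoid(l) - secant * l
        a_U, b_U = dsigmoid(mid), sigmoid(mid) - dsigmoid(mid) * mid
    else:         # mixed sign: fall back to loose constant bounds
        a_L, b_L = 0.0, sigmoid(l)
        a_U, b_U = 0.0, sigmoid(u)
    return a_L, b_L, a_U, b_U
```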
On the Adversarial Robustness of Vision Transformers
Following the success in advancing natural language processing and
understanding, transformers are expected to bring revolutionary changes to
computer vision. This work provides the first comprehensive study of the
robustness of vision transformers (ViTs) against adversarial perturbations.
Across various white-box and transfer attack settings, we find that ViTs
possess better adversarial robustness when compared with convolutional neural
networks (CNNs). This observation also holds for certified robustness. We
summarize the following main observations contributing to the improved
robustness of ViTs:
1) Features learned by ViTs contain less low-level information and are more
generalizable, which contributes to superior robustness against adversarial
perturbations.
2) Introducing convolutional or tokens-to-token blocks for learning low-level
features in ViTs can improve classification accuracy but at the cost of
adversarial robustness.
3) Increasing the proportion of transformers in the model structure (when the
model consists of both transformer and CNN blocks) leads to better robustness.
But for a pure transformer model, simply increasing the size or adding layers
cannot guarantee a similar effect.
4) Pre-training on larger datasets does not significantly improve adversarial
robustness, though it is critical for training ViTs.
5) Adversarial training is also applicable to ViTs for training robust models.
Furthermore, feature visualization and frequency analysis are conducted to
explain these observations. The results show that ViTs are less sensitive to
high-frequency perturbations than CNNs, and that there is a high correlation
between how well the model learns low-level features and its robustness
against different frequency-based perturbations.
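As a concrete illustration of the kind of frequency analysis mentioned above,
one can add band-limited noise to inputs and compare the resulting accuracy
drops. The sketch below generates noise restricted to high (or low) spatial
frequencies; the cutoff radius and noise scale are illustrative choices, not
the settings used in the paper.

```python
# Band-limited Gaussian noise for probing sensitivity to high- vs.
# low-frequency perturbations; radius and scale are illustrative.
import numpy as np

def band_limited_noise(shape, high_pass=True, radius=8, scale=0.05, rng=None):
    """Return Gaussian noise of the given shape (..., H, W) keeping only
    high- (or low-) frequency components of its 2D spectrum."""
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.normal(scale=scale, size=shape)
    spectrum = np.fft.fftshift(np.fft.fft2(noise, axes=(-2, -1)), axes=(-2, -1))
    h, w = shape[-2], shape[-1]
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist > radius if high_pass else dist <= radius
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask, axes=(-2, -1)),
                            axes=(-2, -1))
    return np.real(filtered)
```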
Federated Learning for Sparse Principal Component Analysis
In the rapidly evolving realm of machine learning, algorithm effectiveness
often faces limitations due to data quality and availability. Traditional
approaches grapple with data sharing due to legal and privacy concerns. The
federated learning framework addresses this challenge. Federated learning is a
decentralized approach where model training occurs on the client side, preserving
privacy by keeping data localized. Instead of sending raw data to a central
server, only model updates are exchanged, enhancing data security. We apply
this framework to Sparse Principal Component Analysis (SPCA) in this work. SPCA
aims to attain sparse component loadings while maximizing data variance for
improved interpretability. Besides the L1-norm regularization term in
conventional SPCA, we add a smoothing function to facilitate gradient-based
optimization methods. Moreover, to improve computational efficiency, we
introduce a least-squares approximation to the original SPCA formulation. This
enables analytic solutions in the optimization steps, leading to substantial
computational improvements. Within the federated framework, we formulate SPCA
as a consensus optimization problem, which can be solved using the Alternating
Direction Method of Multipliers (ADMM). Our extensive experiments involve both
IID and non-IID random features across various data owners. Results on
synthetic and public datasets affirm the efficacy of our federated SPCA
approach. Comment: 11 pages, 7 figures, 1 table. Accepted by IEEE BigData
2023, Sorrento, Italy.
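The consensus formulation mentioned above can be illustrated with a short
sketch: each client updates its local loading vector against its own covariance
matrix with an augmented-Lagrangian penalty toward a shared consensus variable,
the server averages and soft-thresholds, and dual variables stay on the
clients. The gradient-step local update and the hyperparameters below are
simplifications; the paper's smoothed L1 term and closed-form least-squares
updates are not reproduced here.

```python
# Minimal sketch of sparse PCA as consensus ADMM across clients. It
# estimates one sparse leading loading vector and only illustrates the
# communication structure, not the paper's exact updates.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def federated_spca_admm(covs, lam=0.1, rho=1.0, rounds=50, inner=20, lr=0.01):
    """covs: list of local sample covariance matrices, one per client."""
    d = covs[0].shape[0]
    rng = np.random.default_rng(0)
    z = rng.normal(size=d)
    z /= np.linalg.norm(z)                    # global consensus loading
    w = [z.copy() for _ in covs]              # local loadings
    y = [np.zeros(d) for _ in covs]           # dual variables
    for _ in range(rounds):
        # Local step: raise explained variance w^T S w while the
        # augmented-Lagrangian term keeps w_k near the consensus z.
        for k, S in enumerate(covs):
            for _ in range(inner):
                grad = -2.0 * S @ w[k] + y[k] + rho * (w[k] - z)
                w[k] = w[k] - lr * grad
            w[k] /= max(np.linalg.norm(w[k]), 1.0)   # keep iterates bounded
        # Server step: average client messages, shrink (L1), renormalize.
        avg = np.mean([w[k] + y[k] / rho for k in range(len(covs))], axis=0)
        z = soft_threshold(avg, lam / (rho * len(covs)))
        if np.linalg.norm(z) > 0:
            z /= np.linalg.norm(z)
        # Dual step, performed locally by each client.
        for k in range(len(covs)):
            y[k] = y[k] + rho * (w[k] - z)
    return z
```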
A Study of Low-Resource Speech Commands Recognition based on Adversarial Reprogramming
In this study, we propose a novel adversarial reprogramming (AR) approach for
low-resource spoken command recognition (SCR), and build an AR-SCR system. The
AR procedure aims to modify the acoustic signals (from the target domain) to
repurpose a pretrained SCR model (from the source domain). To solve the label
mismatches between source and target domains, and further improve the stability
of AR, we propose a novel similarity-based label mapping technique to align
classes. In addition, the transfer learning (TL) technique is combined with the
original AR process to improve the model adaptation capability. We evaluate the
proposed AR-SCR system on three low-resource SCR datasets, including Arabic,
Lithuanian, and dysarthric Mandarin speech. Experimental results show that with
a pretrained acoustic model (AM) trained on a large-scale English dataset, the
proposed AR-SCR system outperforms the current state-of-the-art results on the
Arabic and Lithuanian speech commands datasets, with only a limited amount of
training data. Comment: Submitted to ICASSP 202
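The reprogramming step itself can be sketched briefly: a single trainable
perturbation is added to every target-domain waveform so that a frozen
source-domain classifier, composed with a source-to-target label mapping,
predicts the target commands. The model interface, the fixed many-to-one
mapping table, and the hyperparameters below are assumptions; the paper
additionally learns the mapping with its similarity-based technique and
combines AR with transfer learning.

```python
# Sketch of adversarial reprogramming (AR) for spoken command
# recognition: only the input perturbation `delta` is trained, the
# source-domain model stays frozen. Interfaces are assumptions.
import torch
import torch.nn.functional as F

def train_reprogramming(frozen_model, loader, label_map, wave_len=16000,
                        epochs=10, lr=1e-3):
    """label_map: LongTensor (n_source_classes,) giving, for each source
    class, the target class it is mapped to (many-to-one)."""
    n_target = int(label_map.max().item()) + 1
    # One-hot mapping matrix (n_source, n_target) to pool probabilities.
    mapping = F.one_hot(label_map, num_classes=n_target).float()
    delta = torch.zeros(wave_len, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for wave, target_label in loader:               # wave: (B, wave_len)
            source_logits = frozen_model(wave + delta)  # (B, n_source_classes)
            # Pool source-class probabilities into target classes.
            target_probs = F.softmax(source_logits, dim=-1) @ mapping
            loss = F.nll_loss(torch.log(target_probs + 1e-8), target_label)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta.detach()
```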
- …